Image segmentation model without initial contour
LUO Qin, WANG Yan
Journal of Computer Applications    2021, 41 (4): 1179-1183.   DOI: 10.11772/j.issn.1001-9081.2020071058
In order to enhance robustness to the initial contour and improve segmentation efficiency for images with intensity inhomogeneity or noise, a region-based active contour model was proposed. First, a global intensity fitting force and a local intensity fitting force were designed separately. Then, the model's fitting term was obtained as their linear combination, and the weight between the two fitting forces was adjusted to improve the robustness of the model to the initial contour. Finally, the length term of the evolution curve was employed to keep the curve smooth. Experimental results show that compared with the Region-Scalable Fitting (RSF) model and the Selective Local or Global Segmentation (SLGS) model, the proposed model reduces the number of iterations by about 57% and 31%, and the segmentation time by about 62% and 14%, respectively. The proposed model can quickly and accurately segment noisy images and images with intensity inhomogeneity without an initial contour, and it also performs well on practical images such as medical and infrared images.
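The combination of the two fitting forces can be sketched as follows. This is an illustrative reconstruction of a weighted fitting term, not the paper's exact formulation; the function name `fitting_force` and its arguments are hypothetical.

```python
# Hedged sketch: a pixel's fitting force as a weighted linear combination
# of a global and a local intensity fitting force, as in region-based
# active contour models. w in [0, 1] trades global against local fitting.
def fitting_force(pixel, global_mean_in, global_mean_out,
                  local_mean_in, local_mean_out, w):
    # each force compares the pixel with the inside/outside region means;
    # a negative value drives the pixel toward the foreground region
    f_global = (pixel - global_mean_in) ** 2 - (pixel - global_mean_out) ** 2
    f_local = (pixel - local_mean_in) ** 2 - (pixel - local_mean_out) ** 2
    return w * f_global + (1 - w) * f_local
```

With `w` close to 1 the global term dominates, which is what makes the evolution less dependent on where the initial contour is placed.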
Analysis of double-channel Chinese sentiment model integrating grammar rules
QIU Ningjia, WANG Xiaoxia, WANG Peng, WANG Yanchun
Journal of Computer Applications    2021, 41 (2): 318-323.   DOI: 10.11772/j.issn.1001-9081.2020050723
Concerning the problem that ignoring grammar rules reduces classification accuracy in Chinese text sentiment analysis, a double-channel Chinese sentiment classification model integrating grammar rules was proposed, namely CB_Rule (grammar Rules of CNN and Bi-LSTM). First, grammar rules were designed to extract information with more explicit sentiment tendencies, and semantic features were extracted by using the local perception capability of a Convolutional Neural Network (CNN). After that, considering that processing rules alone may ignore context, a Bi-directional Long Short-Term Memory (Bi-LSTM) network was used to extract global features containing contextual information, which were fused with and supplemented the local features, improving the sentiment tendency information of the CNN channel. Finally, the improved features were input into the classifier to judge sentiment tendency, and the Chinese sentiment model was constructed. The proposed model was compared with the R-Bi-LSTM model (Bi-LSTM for Chinese sentiment analysis combined with grammar Rules) and the SCNN model (a travel review sentiment analysis model that combines Syntactic rules and CNN) on a Chinese e-commerce review text dataset. Experimental results show that the accuracy of the proposed model is increased by 3.7 and 0.6 percentage points respectively, indicating that the proposed CB_Rule model has a good classification effect.
Fast algorithm for distance regularized level set evolution model
YUAN Quan, WANG Yan, LI Yuxian
Journal of Computer Applications    2020, 40 (9): 2743-2747.   DOI: 10.11772/j.issn.1001-9081.2020010106
The gradient descent method converges slowly and is sensitive to local minima. Therefore, an improved Nesterov's Accelerated Gradient (NAG) algorithm was proposed to replace the gradient descent algorithm in the Distance Regularized Level Set Evolution (DRLSE) model, yielding a fast image segmentation algorithm based on NAG. First, the initial level set evolution equation was given. Second, the gradient was calculated by using the NAG algorithm. Finally, the level set function was updated continuously, preventing it from falling into a local minimum. Experimental results show that compared with the original algorithm in the DRLSE model, the proposed algorithm reduces the number of iterations by about 30% and the CPU running time by more than 30%. The algorithm is simple to implement and can be applied to segment images with high real-time requirements, such as infrared and medical images.
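As a minimal illustration of why the substitution helps (on a toy function, not the DRLSE energy), the following compares plain gradient descent with Nesterov's accelerated gradient when minimizing f(x) = x²; the step size and momentum values are arbitrary choices for this sketch.

```python
# Toy comparison: iterations needed to reach |x| < tol on f(x) = x^2,
# whose gradient is 2x. NAG evaluates the gradient at a look-ahead point.
def gd(x, lr=0.02, tol=1e-3, max_iter=10000):
    for k in range(max_iter):
        if abs(x) < tol:
            return k
        x -= lr * 2 * x                      # plain gradient step
    return max_iter

def nag(x, lr=0.02, mu=0.9, tol=1e-3, max_iter=10000):
    v = 0.0
    for k in range(max_iter):
        if abs(x) < tol:
            return k
        v = mu * v - lr * 2 * (x + mu * v)   # gradient at look-ahead point
        x += v
    return max_iter
```

With a small step size the momentum term cuts the iteration count substantially, which mirrors the roughly 30% reduction reported above.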
Log analysis and workload characteristic extraction in distributed storage system
GOU Zi'an, ZHANG Xiao, WU Dongnan, WANG Yanqiu
Journal of Computer Applications    2020, 40 (9): 2586-2593.   DOI: 10.11772/j.issn.1001-9081.2020010121
Analyzing the workload running on a file system helps optimize the performance of distributed file systems and is crucial to the construction of new storage systems. Due to the complexity of workloads and their increasing diversity of scale, intuition-based analysis cannot fully capture the characteristics of workload traces. To solve this problem, a distributed log analysis and workload characteristic extraction model was proposed. First, read- and write-related information was extracted from distributed file system logs according to keywords. Second, the workload characteristics were described from two aspects: statistics and timing. Finally, the possibility of system optimization based on workload characteristics was analyzed. Experimental results show that the proposed model is feasible and accurate, and can give workload statistics and timing characteristics in detail. It has the advantages of low overhead, high timeliness and ease of analysis, and can be used to guide the synthesis of workloads with the same characteristics, hot-spot data monitoring, and cache prefetching optimization of the system.
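The keyword-based extraction step might look like the sketch below. The log format, the regular expression and the field names are invented for illustration; they are not the actual log format of the system studied in the paper.

```python
import re

# Hypothetical log lines and a keyword pattern for read/write operations.
LOG_LINES = [
    "2020-01-01 10:00:01 op=read  size=4096",
    "2020-01-01 10:00:02 op=write size=8192",
    "2020-01-01 10:00:03 op=read  size=4096",
]
PATTERN = re.compile(r"op=(read|write)\s+size=(\d+)")

def extract_ops(lines):
    """Pull (operation, size) pairs out of raw log lines by keyword."""
    return [(m.group(1), int(m.group(2)))
            for line in lines if (m := PATTERN.search(line))]

ops = extract_ops(LOG_LINES)
read_sizes = [s for op, s in ops if op == "read"]
```

From pairs like these, per-operation statistics (counts, size distributions) and timing characteristics (inter-arrival gaps) can then be aggregated.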
Wireless sensor network intrusion detection system based on sequence model
CHENG Xiaohui, NIU Tong, WANG Yanjun
Journal of Computer Applications    2020, 40 (6): 1680-1684.   DOI: 10.11772/j.issn.1001-9081.2019111948
With the rapid development of the Internet of Things (IoT), more and more IoT node devices are deployed, but the accompanying security problems cannot be ignored. Node devices at the network layer of the IoT mainly communicate through wireless sensor networks; compared with the Internet, they are more open and more vulnerable to network attacks such as denial of service. Aiming at the network-layer security problem faced by wireless sensor networks, a network intrusion detection system based on a sequence model was proposed to detect and raise alarms for network-layer intrusions, achieving a higher recognition rate and a lower false positive rate. In addition, aiming at the security of the node host devices in wireless sensor networks, and taking node overhead into consideration, a host intrusion detection system based on a simple sequence model was proposed. The experimental results show that the two intrusion detection systems, for the network layer and the host layer of wireless sensor networks, both achieve an accuracy above 99% with a false detection rate of about 1%, which meets industrial requirements. Together, the two proposed systems can comprehensively and effectively protect wireless sensor network security.
Improved fuzzy c-means MRI segmentation based on neighborhood information
WANG Yan, HE Hongke
Journal of Computer Applications    2020, 40 (4): 1196-1201.   DOI: 10.11772/j.issn.1001-9081.2019091539
In the segmentation of brain images, image quality is often degraded by noise or outliers, and traditional fuzzy clustering has limitations and is easily affected by the initial values, which makes it difficult for doctors to accurately identify and extract brain tissue. Aiming at these problems, an improved fuzzy clustering image segmentation method based on pixel neighborhoods modeled by a Markov random field was proposed. Firstly, the initial clustering centers were determined by a Genetic Algorithm (GA). Secondly, the expression of the objective function was changed: a correction term was added to the objective function, changing the calculation of the membership matrix, which was adjusted by a constraint coefficient. Finally, a Markov Random Field (MRF) was used to represent the label information of the neighborhood pixels, and the maximized conditional probability of the Markov random field was used to represent the neighborhood of each pixel, which improves noise immunity. Experimental results show that the proposed method has good noise immunity, reduces the false segmentation rate and achieves high segmentation accuracy on brain images. The average accuracy of the segmented images reaches a Jaccard Similarity (JS) index of 82.76%, a Dice index of 90.45%, and a Sensitivity index of 90.19%. At the same time, the segmented brain image boundaries are clearer and the segmented images are closer to the standard segmentation.
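For reference, the core membership/center update cycle of standard fuzzy c-means, which the paper modifies, can be sketched on 1-D data; this omits the GA initialization and the MRF neighborhood correction described above, and all values are toy data.

```python
# Minimal standard fuzzy c-means on 1-D points (fuzzifier m, fixed iterations).
def fcm(points, centers, m=2.0, iters=30):
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = []
        for x in points:
            d = [max(abs(x - c), 1e-9) for c in centers]  # avoid zero distance
            u.append([1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                                for j in range(len(centers)))
                      for i in range(len(centers))])
        # center update: mean of points weighted by u^m
        centers = [sum(u[k][i] ** m * points[k] for k in range(len(points))) /
                   sum(u[k][i] ** m for k in range(len(points)))
                   for i in range(len(centers))]
    return centers

centers = fcm([0.0, 0.1, 0.2, 5.0, 5.1, 5.2], [1.0, 4.0])
```

The paper's variant adds a neighborhood-dependent correction term to this objective so that a noisy pixel's membership is pulled toward the labels of its MRF neighborhood.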
Human activity recognition based on improved particle swarm optimization-support vector machine and context-awareness
WANG Yang, ZHAO Hongdong
Journal of Computer Applications    2020, 40 (3): 665-671.   DOI: 10.11772/j.issn.1001-9081.2019091551
Concerning the low accuracy of human activity recognition, a recognition method combining a Support Vector Machine (SVM) with context-awareness (the actual logic or a statistical model of human motion state transitions) was proposed to identify six types of human activities (walking, going upstairs, going downstairs, sitting, standing, lying). The logical relationships existing between consecutive human activity samples were exploited by the method. Firstly, the SVM model was optimized by using the Improved Particle Swarm Optimization (IPSO) algorithm. Then, the optimized SVM was used to classify the human activities. Finally, context-awareness was used to correct erroneous recognition results. Experimental results show that the classification accuracy of the proposed method reaches 94.2% on the Human Activity Recognition Using Smartphones (HARUS) dataset of the University of California, Irvine (UCI), which is higher than that of traditional classification methods based on pattern recognition.
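The context-awareness correction can be illustrated as a transition-logic filter over the classifier's label sequence. The allowed-transition table below is a made-up example, not the rules used in the paper.

```python
# Hypothetical transition logic: which activity may directly follow which.
ALLOWED = {
    "lying":      {"lying", "sitting"},
    "sitting":    {"sitting", "lying", "standing"},
    "standing":   {"standing", "sitting", "walking", "upstairs", "downstairs"},
    "walking":    {"walking", "standing", "upstairs", "downstairs"},
    "upstairs":   {"upstairs", "walking", "standing"},
    "downstairs": {"downstairs", "walking", "standing"},
}

def correct(labels):
    """Replace classifier outputs that violate the transition logic
    with the last consistent label."""
    out = [labels[0]]
    for lab in labels[1:]:
        out.append(lab if lab in ALLOWED[out[-1]] else out[-1])
    return out
```

For instance, a "lying → upstairs" jump is physically implausible without standing first, so the filter keeps "lying" until a consistent label appears.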
Semi-supervised learning method for automatic nuclei segmentation using generative adversarial network
CHENG Kai, WANG Yan, LIU Jianfei
Journal of Computer Applications    2020, 40 (10): 2917-2922.   DOI: 10.11772/j.issn.1001-9081.2020020136
In order to reduce the dependence on the number of labeled images, a novel semi-supervised learning method was proposed for automatic segmentation of nuclei. Firstly, a novel Convolutional Neural Network (CNN) was used to extract the cell regions from the background. Then, a confidence map for the input image was generated by the discriminator network via a fully convolutional network. At the same time, the adversarial loss and the standard cross-entropy loss were coupled to improve the performance of the segmentation network. Finally, the labeled and unlabeled images were combined with the confidence maps to train the segmentation network, so that it could identify the nuclei in the extracted cell regions. Experimental results on 84 images (1/8 of the training set labeled, the rest unlabeled) show that the proposed method achieves a SEGmentation accuracy measurement (SEG) score of 77.9% and an F1 score of 76.0%, both better than those obtained when training with 670 fully labeled images.
Adaptive intensity fitting model for segmentation of images with intensity inhomogeneity
ZHANG Xuyuan, WANG Yan
Journal of Computer Applications    2019, 39 (9): 2719-2725.   DOI: 10.11772/j.issn.1001-9081.2019020364

For the segmentation of images with intensity inhomogeneity, a region-adaptive intensity fitting model combining global information was proposed. Firstly, local and global terms were constructed based on local and global image information respectively. Secondly, an adaptive weight function was defined to indicate the gray-scale deviation degree of a pixel's neighborhood by utilizing the extreme difference (range) within that neighborhood. Finally, the defined weight function was used to adaptively assign weights to the local and global terms, yielding the energy functional of the proposed model, and the iterative equation of the model's level set function was derived by the variational method. Experimental results show that, compared with the Region-Scalable Fitting (RSF) model and the Local and Global Intensity Fitting (LGIF) model, the proposed model segments various inhomogeneous images stably and accurately, and is more robust to the position, size and shape of the initial contour of the evolution curve.

Segmentation of nasopharyngeal neoplasms based on random forest feature selection algorithm
LI Xian, WANG Yan, LUO Yong, ZHOU Jiliu
Journal of Computer Applications    2019, 39 (5): 1485-1489.   DOI: 10.11772/j.issn.1001-9081.2018102205
Due to the low gray-level contrast and blurred organ boundaries in medical images, a Random Forest (RF) feature selection algorithm was proposed to segment nasopharyngeal neoplasm MR images. Firstly, gray-level, texture and geometry information was extracted from nasopharyngeal neoplasm images to construct a random forest classifier. Then, feature importances were measured by the random forest, and the proposed feature selection method was applied to the original handcrafted feature set. Finally, the optimal feature subset obtained from the feature selection process was used to construct a new random forest classifier to perform the final segmentation. Experimental results show that the proposed algorithm achieves a Dice coefficient of 79.197%, accuracy of 97.702%, sensitivity of 72.191%, and specificity of 99.502%. Compared with the conventional random forest based and Deep Convolutional Neural Network (DCNN) based segmentation algorithms, it is clear that the proposed feature selection algorithm can effectively extract useful information from nasopharyngeal neoplasm MR images and improve segmentation accuracy under small-sample circumstances.
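The selection step itself, given importances already measured by a random forest, can be sketched as follows. The cumulative-coverage strategy and the feature names are illustrative assumptions, not the paper's exact criterion.

```python
# Hedged sketch: keep the highest-importance features until their
# normalized cumulative importance reaches a coverage threshold.
def select_features(importances, coverage=0.9):
    """importances: {name: score}. Returns the smallest top-ranked
    subset whose cumulative normalized score reaches `coverage`."""
    total = sum(importances.values())
    picked, acc = [], 0.0
    for name, score in sorted(importances.items(),
                              key=lambda kv: kv[1], reverse=True):
        picked.append(name)
        acc += score / total
        if acc >= coverage:
            break
    return picked

subset = select_features(
    {"gray": 0.5, "texture": 0.3, "geometry": 0.15, "noise": 0.05})
```

The reduced subset then trains a second classifier, which is the two-stage structure described above.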
Downlink resource scheduling based on weighted average delay in long term evolution system
WANG Yan, MA Xiurong, SHAN Yunlong
Journal of Computer Applications    2019, 39 (5): 1429-1433.   DOI: 10.11772/j.issn.1001-9081.2018081734
Aiming at the transmission performance requirements of Real-Time (RT) and Non-Real-Time (NRT) services for multiple users in the downlink of the Long Term Evolution (LTE) mobile communication system, an improved Modified Largest Weighted Delay First (MLWDF) scheduling algorithm based on weighted average delay was proposed. On the basis of considering both channel perception and Quality of Service (QoS) perception, a weighted average delay factor reflecting the state of the user buffer was utilized, obtained by balancing the average delay of the data waiting to be transmitted against that of the already transmitted data in the user buffer. RT services with large delay and heavy traffic are prioritized, which improves the user experience. Theoretical analysis and link simulation show that the proposed algorithm improves the QoS performance of RT services while guaranteeing the delay and fairness of each service. Compared with the MLWDF algorithm when the number of users reaches 50, the packet loss rate of RT services is decreased by 53.2% and the average throughput of RT traffic is increased by 44.7%. Although the throughput of NRT services is sacrificed, it is still better than that of the VT-MLWDF (Virtual Token MLWDF) algorithm. The theoretical analysis and simulation results show that the transmission performance and QoS are superior to those of the comparison algorithms.
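A hedged sketch of an MLWDF-style priority metric extended with a buffer-state delay factor follows; the exact form of the weighted average delay factor in the paper may differ, and every number below is invented for illustration.

```python
# Classic MLWDF ranks users by (delay weight) x (head-of-line delay) x
# (instantaneous rate / average rate); the modified metric additionally
# scales by a buffer-state average-delay factor (simplified stand-in here).
def priority(delay_weight, hol_delay, inst_rate, avg_rate, avg_delay_factor):
    return delay_weight * hol_delay * (inst_rate / avg_rate) * avg_delay_factor

users = {
    "rt_user":  priority(1.0, 80.0, 1.2e6, 1.0e6, 1.5),  # delayed RT flow
    "nrt_user": priority(0.5, 20.0, 1.5e6, 1.0e6, 1.0),  # NRT background flow
}
chosen = max(users, key=users.get)  # scheduler picks the largest metric
```

The RT flow with a long head-of-line delay and a loaded buffer wins the resource block, which is the prioritization behavior described above.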
Automatic segmentation of nasopharyngeal neoplasm in MR image based on U-net model
PAN Peike, WANG Yan, LUO Yong, ZHOU Jiliu
Journal of Computer Applications    2019, 39 (4): 1183-1188.   DOI: 10.11772/j.issn.1001-9081.2018091908
Because of the uncertain growth direction and complex anatomical structure of nasopharyngeal tumors, doctors usually delineate the tumor regions in MR images manually, which is time-consuming, and the delineation result depends heavily on the experience of the doctor. To solve this problem, a U-net based automatic segmentation algorithm for nasopharyngeal tumors in MR images was proposed based on deep learning, in which the max-pooling operation in the original U-net model was replaced by a convolution operation to keep more feature information. Firstly, regions of 128×128 were extracted from all slices containing tumor regions as data samples. Secondly, the patient samples were divided into a training set and a testing set, and data augmentation was performed on the training samples. Finally, all the training samples were used to train the model. To evaluate the performance of the proposed U-net based model, all slices of the patients in the testing set were segmented, with the following average results: Dice Similarity Coefficient (DSC) of 80.05%, Percent Match (PM) of 85.7%, Correspondence Ratio (CR) of 71.26%, and Average Symmetric Surface Distance (ASSD) of 1.1568. Compared with the Convolutional Neural Network (CNN) based model, the DSC, PM and CR of the proposed method are increased by 9.86, 19.61 and 16.02 percentage points respectively, and the ASSD is decreased by 0.4364. Compared with the Fully Convolutional Network (FCN) model and the max-pooling based U-net model, the DSC and CR of the proposed method achieve the best results, while its PM is 2.55 percentage points lower than the maximum of the two comparison models and its ASSD is slightly higher than their minimum by 0.0046. The experimental results show that the proposed model achieves good segmentation of nasopharyngeal neoplasms and can assist doctors in diagnosis.
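The substitution of max-pooling by a strided convolution can be illustrated on a toy single-channel input: the convolution keeps a learnable weighted mix of all four values in each 2×2 block rather than only the maximum. The kernel weights below are arbitrary, not learned.

```python
# 2x2 max-pooling: keeps only the largest value per block.
def max_pool2x2(img):
    n = len(img)
    return [[max(img[i][j], img[i][j+1], img[i+1][j], img[i+1][j+1])
             for j in range(0, n, 2)] for i in range(0, n, 2)]

# 2x2 convolution with stride 2: a weighted sum of all four values,
# where the weights w would be learned during training.
def conv2x2_stride2(img, w):
    n = len(img)
    return [[w[0]*img[i][j] + w[1]*img[i][j+1] +
             w[2]*img[i+1][j] + w[3]*img[i+1][j+1]
             for j in range(0, n, 2)] for i in range(0, n, 2)]

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
```

Both operations halve the spatial resolution, but the strided convolution lets the network decide how to combine the block instead of discarding three of the four values.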
Non-rigid multi-modal brain image registration by using improved Zernike moment based local descriptor and graph cuts discrete optimization
WANG Lifang, WANG Yanli, LIN Suzhen, QIN Pinle, GAO Yuan
Journal of Computer Applications    2019, 39 (2): 582-588.   DOI: 10.11772/j.issn.1001-9081.2018061423
When noise and intensity distortion exist in brain images, the method based on structural information cannot accurately extract image intensity information, edge and texture features at the same time. In addition, the computational complexity of continuous optimization is relatively high. To solve these problems, according to the structural information of the image, a non-rigid multi-modal brain image registration method based on Improved Zernike Moment based Local Descriptor (IZMLD) and Graph Cuts (GC) discrete optimization was proposed. Firstly, the image registration problem was regarded as the discrete label problem of Markov Random Field (MRF), and the energy function was constructed. The two energy terms were composed of the pixel similarity and smoothness of the displacement vector field. Secondly, a smoothness constraint based on the first derivative of the deformation vector field was used to penalize displacement labels with sharp changes between adjacent pixels. The similarity metric based on IZMLD was used as a data item to represent pixel similarity. Thirdly, the Zernike moments of the image patches were used to calculate the self-similarity of the reference image and the floating image in the local neighborhood and construct an effective local descriptor. The Sum of Absolute Difference (SAD) between the descriptors was taken as the similarity metric. Finally, the whole energy function was discretized and its minimum value was obtained by using an extended optimization algorithm of GC. 
The experimental results show that, compared with the registration methods based on the Sum of Squared Differences on Entropy images (ESSD), the Modality Independent Neighborhood Descriptor (MIND) and the Stochastic Second-Order Entropy Image (SSOEI), the mean target registration error of the proposed method is decreased by 18.78%, 10.26% and 8.89% respectively, and its registration time is shortened by about 20 s compared to the continuous optimization algorithm. The proposed method achieves efficient and accurate registration of images with noise and intensity distortion.
Human interaction recognition based on RGB and skeleton data fusion model
JI Xiaofei, QIN Linlin, WANG Yangyang
Journal of Computer Applications    2019, 39 (11): 3349-3354.   DOI: 10.11772/j.issn.1001-9081.2019040633
In recent years, significant progress has been made in human interaction recognition based on RGB video sequences, but because RGB data lacks depth information, accurate recognition results cannot be obtained for complex interactions. Depth sensors (such as the Microsoft Kinect) can effectively improve the tracking accuracy of whole-body joint points and obtain three-dimensional data that accurately tracks the movement and changes of the human body. According to the respective characteristics of RGB and joint point data, a convolutional neural network model based on dual-stream fusion of RGB and joint point data was proposed. Firstly, the region of interest of the RGB video in the time domain was obtained by using the ViBe algorithm, and the key frames were extracted and mapped to RGB space to obtain a spatial-temporal map representing the video information, which was fed into a convolutional neural network to extract features. Then, a vector was constructed in each frame of the joint point sequence to extract the Cosine Distance (CD) and Normalized Magnitude (NM) features, which were concatenated in the temporal order of the joint point sequence and fed into a convolutional neural network to learn higher-level temporal features. Finally, the softmax recognition probability matrices of the two information sources were fused to obtain the final recognition result. The experimental results show that combining RGB video information with joint point information effectively improves the recognition of human interaction behavior, achieving recognition rates of 92.55% and 80.09% on the public SBU Kinect interaction database and the NTU RGB+D database respectively, verifying the effectiveness of the proposed model for recognizing interactions between two people.
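The final fusion step reduces to combining the two streams' softmax probability vectors and taking the argmax. The class names and probabilities below are made up for illustration; the paper's fusion weights may differ from a plain average.

```python
# Late fusion of two softmax outputs: elementwise weighted average,
# then argmax over the fused class probabilities.
def fuse(p_rgb, p_skel, w=0.5):
    return [w * a + (1 - w) * b for a, b in zip(p_rgb, p_skel)]

classes = ["hug", "punch", "handshake"]
p_rgb  = [0.2, 0.5, 0.3]   # RGB stream: unsure between punch and handshake
p_skel = [0.1, 0.2, 0.7]   # skeleton stream: strongly favors handshake
fused = fuse(p_rgb, p_skel)
label = classes[max(range(len(fused)), key=fused.__getitem__)]
```

Neither stream alone is confident here, but the fused distribution resolves the ambiguity, which is why the dual-stream model outperforms either channel.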
Image matching method with illumination robustness
WANG Yan, LYU Meng, MENG Xiangfu, LI Yuhao
Journal of Computer Applications    2019, 39 (1): 262-266.   DOI: 10.11772/j.issn.1001-9081.2018061210
Focusing on the problem that current image matching algorithms based on local features have a low correct matching rate under illumination change, an illumination-robust image matching algorithm was proposed. Firstly, a Real-Time Contrast Preserving decolorization (RTCP) algorithm was used to convert the image to grayscale, and then a contrast stretching function was used to simulate the influence of different illumination transformations on the image, so as to extract feature points resistant to illumination change. Finally, a feature point descriptor was established using the local intensity order pattern, and matching point pairs were determined according to the Euclidean distance between the local feature descriptors of the images to be matched. On open datasets, the proposed algorithm was compared with the Scale Invariant Feature Transform (SIFT), Speeded Up Robust Feature (SURF), KAZE and ORB (Oriented FAST and Rotated BRIEF) algorithms in matching speed and accuracy. The experimental results show that as the brightness difference between images increases, the matching accuracy of SIFT, SURF, KAZE and ORB drops rapidly, while that of the proposed algorithm decreases slowly and stays above 80%. The proposed algorithm detects feature points more slowly and has a higher descriptor dimension, with an average matching time of 23.47 s, so it is not as fast as the other four algorithms, but its matching quality is much better. The proposed algorithm can overcome the influence of illumination change on image matching.
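The robustness of an intensity-order descriptor can be seen directly: any monotonically increasing intensity transform (brightness shift, contrast stretch) leaves the rank order of neighborhood pixels unchanged. A minimal sketch, with arbitrary patch values and an arbitrary monotone transform:

```python
# Rank of each pixel within its neighborhood; the rank vector is the
# order-pattern part of an intensity-order descriptor.
def order_pattern(patch):
    order = sorted(range(len(patch)), key=patch.__getitem__)
    ranks = [0] * len(patch)
    for r, idx in enumerate(order):
        ranks[idx] = r
    return ranks

patch = [120, 45, 200, 90]
# a monotonically increasing transform: contrast stretch + brightness shift
brighter = [min(255, int(1.4 * v) + 20) for v in patch]
```

Because both patches yield the same rank vector, a descriptor built from ranks matches correctly across the illumination change, while raw-intensity descriptors drift apart.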
Binocular camera multi-pose calibration method based on radial alignment constraint algorithm
YANG Shangkun, WANG Yansong, GUO Hui, WANG Xiaolan, LIU Ningning
Journal of Computer Applications    2018, 38 (9): 2655-2659.   DOI: 10.11772/j.issn.1001-9081.2018020503
In binocular stereo vision, the camera needs to be calibrated to obtain its internal and external parameters for 3D measurement or precise positioning of objects. Through the study of a camera model with first-order radial distortion, linear formulas for solving the internal and external parameters were constructed based on the Radial Alignment Constraint (RAC) calibration method. The inclination angle, rotation angle, pitch angle and main distortion elements of the lens were taken into consideration, which remedies the defects of the traditional RAC calibration method, namely that it considers only radial distortion and requires prior values for some parameters. A 3D reconstruction experiment with a multi-pose binocular camera was carried out using the obtained internal and external parameters. The experimental results show that the reprojection error of this calibration method is distributed in [-0.3, 0.3], and the similarity between the measured trajectory and the actual trajectory is 96%, which helps reduce the error rate of binocular stereo vision 3D measurement.
Quality evaluation model of network operation and maintenance based on correlation analysis
WU Muyang, LIU Zheng, WANG Yang, LI Yun, LI Tao
Journal of Computer Applications    2018, 38 (9): 2535-2542.   DOI: 10.11772/j.issn.1001-9081.2018020412
Traditional network operation and maintenance evaluation methods have two problems. First, they depend too heavily on domain experts' experience in indicator selection and weight assignment, making it difficult to obtain accurate and comprehensive assessment results. Second, network operation and maintenance quality involves data from multiple manufacturers and devices in different formats and types, and a surge of users brings huge amounts of data. To solve these problems, an indicator selection method based on correlation was proposed, focusing on the indicator selection step of the evaluation process. By comparing the strength of the correlation between the data series of indicators, the original indicators are classified into clusters, and the key indicator of each cluster is selected to construct a key indicator system. Data processing and weight determination methods requiring no human participation were also incorporated into the network operation and maintenance quality evaluation model. In the experiments, the indicators selected by the proposed method cover 72.2% of the manually selected indicators, with an information overlap rate 31% lower than that of the manual indicators. The proposed method can effectively reduce human involvement and achieves higher prediction accuracy for alarms.
Fast iterative learning control for regular system in sense of Lebesgue-p norm
CAO Wei, LI Yandong, WANG Yanwei
Journal of Computer Applications    2018, 38 (9): 2455-2458.   DOI: 10.11772/j.issn.1001-9081.2018020439
Focused on the slow convergence of the traditional iterative learning control algorithm for linear regular systems, a fast iterative learning control algorithm was designed for a class of linear regular systems. Compared with the traditional P-type iterative learning control algorithm, the proposed algorithm adds correction terms built from the tracking-error difference signals of the last and present iterations. The convergence of the algorithm was proven by using Young's inequality for convolutions in the sense of the Lebesgue-p norm, and the convergence condition is also given; the results show that the tracking error of the system converges to zero as the number of iterations tends to infinity. Compared with P-type iterative learning control, the proposed algorithm speeds up convergence and avoids the shortcomings of using the λ-norm to measure the tracking error. Simulation further testifies to its validity and effectiveness.
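On a toy scalar plant (y = 0.5u, reference r = 1), the effect of adding a second error term to the P-type law can be sketched. The update law below, with a term in the present iteration's error, is a simplified stand-in for the paper's correction terms, and the gains are hypothetical.

```python
# ILC sketch: u_{k+1} = u_k + g_prev*e_k + g_curr*e_{k+1}. For the
# algebraic plant y = 0.5*u, the present-iteration error e_{k+1} can be
# solved in closed form before applying the update.
def run_ilc(g_prev, g_curr, iters=8):
    r, u, errors = 1.0, 0.0, []
    for _ in range(iters):
        e = r - 0.5 * u                  # tracking error this iteration
        errors.append(abs(e))
        e_next = (r - 0.5 * (u + g_prev * e)) / (1 + 0.5 * g_curr)
        u = u + g_prev * e + g_curr * e_next
    return errors

p_type = run_ilc(1.0, 0.0)   # classic P-type: error contracts by 1/2 per pass
fast   = run_ilc(1.0, 1.0)   # with the extra term: contraction ratio 1/3
```

The extra term tightens the per-iteration contraction factor, which is the "fast" behavior the convergence analysis above establishes.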
Bearing fault diagnosis method based on Gibbs sampling
WANG Yan, LUO Qian, DENG Hui
Journal of Computer Applications    2018, 38 (7): 2136-2140.   DOI: 10.11772/j.issn.1001-9081.2018010035
To avoid one-sided judgments in existing bearing fault diagnosis methods, a bearing fault diagnosis method based on Gibbs sampling was proposed. Firstly, the bearing vibration signal was decomposed by Local Characteristic-scale Decomposition (LCD) to obtain Intrinsic Scale Components (ISCs). Secondly, time-domain features were extracted from the bearing vibration signal and the ISCs, and ranked by feature sensitivity; the top-ranked features were selected to form the feature set. Thirdly, the feature set was used to train a multi-dimensional Gaussian distribution model based on Gibbs sampling. Finally, posterior analysis was used to obtain the probabilities and realize bearing fault diagnosis. The experimental results show that the diagnostic accuracy of the proposed method reaches 100%; compared with a bearing diagnosis method based on Support Vector Machine (SVM), the diagnostic accuracy is improved by 11.1 percentage points when the number of features is 43. The proposed method can effectively diagnose rolling bearing faults and also performs well on high-dimensional, complex bearing fault data.
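The Gibbs sampling machinery itself can be illustrated with a minimal sampler for a bivariate normal; this is generic Gibbs sampling over a two-dimensional Gaussian, not the paper's bearing-feature model, and the correlation value is arbitrary.

```python
import random
import math

# Gibbs sampling for a standard bivariate normal with correlation rho:
# alternately draw each coordinate from its conditional distribution.
def gibbs_bivariate_normal(rho, n, seed=0):
    rng = random.Random(seed)
    x = y = 0.0
    sd = math.sqrt(1 - rho * rho)
    samples = []
    for _ in range(n):
        x = rng.gauss(rho * y, sd)   # x | y ~ N(rho*y, 1 - rho^2)
        y = rng.gauss(rho * x, sd)   # y | x ~ N(rho*x, 1 - rho^2)
        samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal(0.8, 5000)
mean_x = sum(s[0] for s in samples) / len(samples)
```

The same conditional-update idea scales to the multi-dimensional Gaussian fitted to the bearing features, where the posterior over fault classes is then evaluated from the samples.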
Online signature verification based on curve segment similarity matching
LIU Li, ZHAN Enqi, ZHENG Jianbin, WANG Yang
Journal of Computer Applications    2018, 38 (4): 1046-1050.   DOI: 10.11772/j.issn.1001-9081.2017092186
Aiming at the problems of mismatching and excessive matching distance caused by curve scaling, shifting, rotation and non-uniform sampling in online signature verification, a curve segment similarity matching method was proposed. In the process of online signature verification, the two curves were first partitioned into segments and matched coarsely, and a dynamic programming algorithm based on the cumulative difference matrix of windows was introduced to obtain the matching relationship. Then, the similarity distance of each matching pair and the weighted sum over all matching pairs were calculated; each curve of a matching pair was fitted, similarity-transformed within a certain range, and resampled to obtain the Euclidean distance. Finally, the average similarity distance between the test signature and all template signatures was used as the authentication distance, which was compared with the training threshold to judge authenticity. The method was validated on the open databases SUSIG Visual and SUSIG Blind with Equal Error Rates (EER) of 3.56% and 2.44% respectively when using personalized thresholds, and the EER was reduced by about 14.4% on the Blind dataset compared with the traditional Dynamic Time Warping (DTW) method. The experimental results show that the proposed method has advantages in verifying both skilled forgery and random forgery signatures.
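For comparison, the baseline Dynamic Time Warping distance is the classic dynamic program below; this is plain DTW over 1-D sequences, not the windowed cumulative-difference segment matching described above.

```python
# Classic DTW distance: minimal cumulative pointwise cost over all
# monotone alignments of the two sequences.
def dtw(a, b):
    inf = float("inf")
    cost = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    cost[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match
    return cost[len(a)][len(b)]
```

DTW absorbs non-uniform sampling by stretching, but it is not invariant to scaling or rotation of the pen trajectory, which is the gap the segment-level similarity transform above is designed to close.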
Reference | Related Articles | Metrics
Dynamic threshold signature scheme based on Chinese remainder theorem
WANG Yan, HOU Zhengfeng, ZHANG Xueqi, HUANG Mengjie
Journal of Computer Applications    2018, 38 (4): 1041-1045.   DOI: 10.11772/j.issn.1001-9081.2017092242
Abstract593)      PDF (761KB)(419)       Save
To resist mobile attacks, a new dynamic threshold signature scheme based on the Chinese Remainder Theorem (CRT) was proposed. Firstly, members exchanged their shadows to generate their private keys and the group public key. Secondly, a partial signature was generated by cooperation. Finally, the partial signatures were combined to synthesize the full signature. The scheme does not expose the group private key during signing, so the group private key can be reused. Members update their private keys periodically without changing the group public key, ensuring that signatures issued before an update remain valid. Besides, the scheme allows new members to join while keeping the old members' private keys and the group private key unexposed. The scheme has forward security and can resist mobile attacks effectively. Theoretical analysis and simulation results show that, compared with the proactive threshold scheme based on Lagrange interpolation, the key-update time of the proposed scheme is constant, so the scheme is more time-efficient.
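The CRT machinery underlying such schemes can be illustrated with an Asmuth-Bloom style (t, n) secret-sharing sketch; the parameters below are tiny, hypothetical values chosen only for illustration and are far too small to be secure:

```python
from math import prod

def crt(residues, moduli):
    """Chinese Remainder Theorem: solve x ≡ r_i (mod m_i), pairwise-coprime m_i."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # Mi^-1 mod m (Python 3.8+)
    return x % M

p = 17                        # secret modulus
m = [19, 23, 29, 31, 37]      # pairwise-coprime share moduli (n = 5)
t = 3                         # threshold
secret = 11
y = secret + 2 * p            # blinded secret; must stay below prod(m[:t])
shares = [(y % mi, mi) for mi in m]

# Any t shares reconstruct y by CRT; the secret is then y mod p.
subset = shares[1:4]
y_rec = crt([r for r, _ in subset], [mi for _, mi in subset])
```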
Reference | Related Articles | Metrics
Data updating method for cloud storage based on ciphertext-policy attribute-based encryption
LIU Rong, PAN Hongzhi, LIU Bo, ZU Ting, FANG Qun, HE Xin, WANG Yang
Journal of Computer Applications    2018, 38 (2): 348-351.   DOI: 10.11772/j.issn.1001-9081.2017071856
Abstract508)      PDF (763KB)(432)       Save
Cloud computing data are vulnerable to illegal theft and malicious tampering. To solve these problems, a Dynamic Updating Ciphertext-Policy Attribute-Based Encryption (DU-CPABE) scheme that provides both dynamic data updating and security protection was proposed. Firstly, data were divided into fixed-size blocks by a linear partitioning algorithm. Secondly, the data blocks were encrypted using the Ciphertext-Policy Attribute-Based Encryption (CP-ABE) algorithm. Finally, based on the conventional Merkle Hash Tree (MHT), an Address-MHT (A-MHT) was proposed to support dynamic data updates in cloud computing. Theoretical analysis proved the security of the scheme, and simulation in an ideal channel showed that, for five updates, the average time overhead of data updating was decreased by 14.6% compared with the CP-ABE method. The experimental results show that the DU-CPABE scheme can effectively reduce data update time and system overhead in cloud storage services.
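The role of the hash tree in making block updates verifiable can be sketched with a plain MHT; the A-MHT of the paper adds addressing on top, which the abstract does not detail, so this is only the underlying primitive:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Merkle root over a list of data blocks (duplicating the last node
    whenever a level has an odd number of nodes)."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-0", b"block-1", b"block-2", b"block-3"]
root_before = merkle_root(blocks)
blocks[2] = b"block-2-updated"      # dynamic update of one ciphertext block
root_after = merkle_root(blocks)    # root changes, so the update is verifiable
```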
Reference | Related Articles | Metrics
Local image intensity fitting model combining global image information
CHEN Xing, WANG Yan, WU Xuan
Journal of Computer Applications    2018, 38 (12): 3574-3579.   DOI: 10.11772/j.issn.1001-9081.2018040834
Abstract516)      PDF (1081KB)(407)       Save
The Local Image Fitting (LIF) model is sensitive to the size, shape and position of the initial contour. To solve this problem, a local image intensity fitting model combining global image information was proposed. Firstly, a global term based on global image information was constructed. Secondly, the global term was linearly combined with the local term of the LIF model. Finally, an image segmentation model in the form of a partial differential equation was obtained. The finite difference method was used in the numerical implementation, and the level set function was regularized by a Gaussian filter to keep it smooth. In the segmentation experiments, the proposed model obtains correct segmentation results under different initial contours, and its segmentation time is only 20% to 50% of that of the LIF model. The experimental results show that the proposed model is insensitive to the size, shape and position of the initial contour of the evolution curve, effectively segments images with intensity inhomogeneity, and segments faster. In addition, it can quickly segment some real and synthetic images without any initial contour.
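The Gaussian-filter regularization of the level set function can be sketched as follows; for brevity, a Chan-Vese style global fitting force stands in for the combined local-plus-global fitting term of the actual model, and the image and initialization are toy assumptions:

```python
import numpy as np

def gaussian_kernel(sigma, radius=3):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(phi, sigma=1.0):
    """Regularize the level set function with a separable Gaussian filter."""
    k = gaussian_kernel(sigma)
    phi = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, phi)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, phi)

def evolve(image, phi, dt=0.2, steps=60, sigma=1.0):
    """Toy two-phase evolution: a global intensity fitting force drives phi,
    and Gaussian smoothing keeps the level set function regular."""
    for _ in range(steps):
        inside, outside = phi > 0, phi <= 0
        c_in = image[inside].mean() if inside.any() else 0.0
        c_out = image[outside].mean() if outside.any() else 0.0
        force = (image - c_out)**2 - (image - c_in)**2   # global fitting term
        phi = smooth(phi + dt * force, sigma)            # Gaussian regularization
    return phi

# Bright square on a dark background; rough random phi, no crafted contour.
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0
phi0 = np.random.default_rng(1).normal(0.0, 0.1, img.shape)
seg = evolve(img, phi0) > 0
```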
Reference | Related Articles | Metrics
Resource allocation framework for heterogeneous wireless network based on software defined network
WU Shikui, WANG Yan
Journal of Computer Applications    2018, 38 (11): 3293-3298.   DOI: 10.11772/j.issn.1001-9081.2018040826
Abstract565)      PDF (889KB)(437)       Save
To address the increasing mobile traffic caused by the popularity of smart devices in mobile cellular networks, the control of radio bandwidth and its assignment to multi-radio user equipments were studied. A resource allocation framework based on Software Defined Network (SDN) and a heterogeneous resource allocation algorithm for LTE/WLAN (Long Term Evolution/Wireless Local Area Network) radio networks were proposed. The SDN framework was applied and extended to the heterogeneous resource allocation of the LTE-WLAN integrated network, and the heterogeneous radio bandwidth in the LTE/WLAN multi-radio network was allocated in a holistic way. Heterogeneous resources could be processed by decomposing the functions of centralized solutions onto designated network entities. Simulation results show that the proposed framework achieves a better balance between network throughput and user fairness, and that the algorithm has better convergence.
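As a toy illustration of splitting an aggregated bandwidth pool among competing user demands, the standard max-min fairness rule (an illustrative baseline, not the paper's algorithm) can be written as:

```python
def max_min_fair(capacity, demands):
    """Max-min fair split of one aggregated bandwidth pool among user demands:
    satisfy the smallest demands first, sharing leftover capacity equally."""
    alloc = [0.0] * len(demands)
    order = sorted(range(len(demands)), key=lambda i: demands[i])
    remaining = capacity
    for pos, i in enumerate(order):
        share = remaining / (len(demands) - pos)
        alloc[i] = min(demands[i], share)
        remaining -= alloc[i]
    return alloc

print(max_min_fair(10, [2, 8, 4]))  # → [2.0, 4.0, 4.0]
```

The small demand is fully served, and the rest of the pool is shared equally between the larger demands, which is the throughput/fairness trade-off such allocators target.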
Reference | Related Articles | Metrics
Joint channel non-coherent network coded modulation method
GAO Fengyue, WANG Yan, LI Mu, YU Rui
Journal of Computer Applications    2018, 38 (10): 2955-2959.   DOI: 10.11772/j.issn.1001-9081.2018030591
Abstract486)      PDF (894KB)(265)       Save
For physical-layer network coding over time-varying bi-directional relay channels, a joint channel coding and non-coherent physical-layer network coded modulation and detection scheme requiring no channel state information was designed for the multiple-antenna setting. Firstly, the spatial modulation matrix at the sources was designed to achieve physical-layer network coding. Then, differential spatial modulation was combined with physical-layer network coding, and the maximum a posteriori probability of the superimposed signal was derived at the relay. Moreover, based on the constellation of the superimposed signal, a mapping function from the superimposed signal to the broadcast signal was designed. Lastly, taking advantage of the linear structure of the channel code and combining bit interleaving, channel decoding and a soft-input soft-output detection algorithm, an iterative detection approach for joint channel differential physical-layer network coding was obtained. Simulation results show that the proposed scheme achieves non-coherent transmission and detection for physical-layer network coding over two-way relay channels and effectively enhances the throughput and spectral efficiency of the system.
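The relay's mapping from the superimposed constellation to a network-coded broadcast bit can be illustrated for plain coherent BPSK, a deliberate simplification of the paper's differential spatial modulation setting:

```python
import numpy as np

def pnc_map(y):
    """Map a received superimposed BPSK signal y = x1 + x2 + noise to the
    network-coded bit b1 XOR b2 (denoise-and-forward style mapping).
    With x = 1 - 2b, the noiseless superposition is -2, 0 or +2:
    +/-2 means b1 == b2 (XOR = 0), 0 means b1 != b2 (XOR = 1)."""
    candidates = {-2.0: 0, 0.0: 1, 2.0: 0}
    nearest = min(candidates, key=lambda s: (y - s) ** 2)
    return candidates[nearest]

rng = np.random.default_rng(0)
b1 = rng.integers(0, 2, 100)
b2 = rng.integers(0, 2, 100)
y = (1 - 2 * b1) + (1 - 2 * b2) + rng.normal(0.0, 0.05, 100)
decoded = np.array([pnc_map(v) for v in y])   # equals b1 ^ b2 at this SNR
```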
Reference | Related Articles | Metrics
E-government recommendation algorithm combining community and association sequence mining
HUANG Yakun, WANG Yang, WANG Mingxing
Journal of Computer Applications    2017, 37 (9): 2671-2677.   DOI: 10.11772/j.issn.1001-9081.2017.09.2671
Abstract475)      PDF (1147KB)(457)       Save
Personalized recommendation, as an effective means of information filtering, has been successfully applied to e-commerce, music, film and other fields. Most studies have focused on recommendation accuracy, paid little attention to the diversity of recommendation results, and neglected the process characteristics of recommended items in application areas such as "Internet of Things plus e-government". Aiming at this problem, an e-government recommendation algorithm Combining user Community and Associated Sequence mining (CAS-UC) was proposed to recommend the items most associated with users. Firstly, the static basic attributes and dynamic behavior attributes of users and items were modeled separately. Secondly, user communities were discovered based on users' historical records and attribute similarity, and the user set most similar to the target user was pre-filtered, improving the diversity of recommendation results and reducing the computational load of the core recommendation process. Finally, the associated sequence mining of items took full account of the business characteristics of e-government, adding item sequence mining with a time dimension to further improve recommendation accuracy. Simulation experiments were carried out on the Spark platform with desensitized user information from ewoho.com in Wuhu. The experimental results show that CAS-UC is suitable for recommending items with sequence or process characteristics, and achieves higher recommendation accuracy than traditional recommendation algorithms such as collaborative filtering, matrix factorization and semantic-similarity-based recommendation. Moreover, the multi-community attribution factor of users increases the diversity of recommendation results.
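The two stages, community pre-filtering by attribute similarity and an associated-sequence lookup over similar users' histories, can be sketched as follows; the user profiles and e-government item sequences are hypothetical:

```python
import numpy as np
from collections import Counter

def top_similar_users(target, profiles, k=2):
    """Pre-filter: the k users whose attribute vectors are most
    cosine-similar to the target user."""
    def cos(a, b):
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        return a @ b / (na * nb) if na and nb else 0.0
    return sorted(profiles, key=lambda u: -cos(target, profiles[u]))[:k]

def next_item(history, sequences):
    """Associated-sequence step: among similar users' item sequences, count
    what most often follows the target user's last item."""
    last = history[-1]
    follows = Counter(seq[i + 1] for seq in sequences
                      for i in range(len(seq) - 1) if seq[i] == last)
    return follows.most_common(1)[0][0] if follows else None

profiles = {"u1": np.array([1.0, 0.0, 1.0]), "u2": np.array([1.0, 0.2, 0.9]),
            "u3": np.array([0.0, 1.0, 0.0])}
target = np.array([1.0, 0.1, 1.0])
community = top_similar_users(target, profiles)
histories = {"u1": ["register", "submit", "review"],
             "u2": ["register", "submit", "approve"],
             "u3": ["login", "browse"]}
rec = next_item(["register"], [histories[u] for u in community])
print(rec)  # → submit
```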
Reference | Related Articles | Metrics
Yac: yet another distributed consensus algorithm
ZHANG Jian, WANG Yang, LIU Dandan
Journal of Computer Applications    2017, 37 (9): 2524-2530.   DOI: 10.11772/j.issn.1001-9081.2017.09.2524
Abstract1458)      PDF (1104KB)(693)       Save
Traditional leader-based distributed consensus algorithms with static topology suffer from serious load imbalance and a single-point performance bottleneck, and cannot work properly when more than 50% of the cluster nodes fail. To solve these problems, a distributed consensus algorithm (Yac) based on dynamic topology and limited voting was proposed. The algorithm dynamically generated the subset of members and the Leader node participating in each consensus vote, and varied them over time, achieving statistical load balance. By removing the strong constraint that a majority of all members must participate in every vote, the algorithm achieved a higher degree of failure tolerance. The security constraints of the algorithm were re-established by a log chain mechanism, and the correctness of the algorithm was proved. The experimental results show that the single-point load concentration of the improved algorithm is significantly lower than that of Zookeeper, a mainstream leader-based distributed consensus implementation with static topology. The improved algorithm tolerates more failures than Zookeeper in most cases and matches it in the worst case, and under the same cluster size it has a higher throughput upper limit than Zookeeper.
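The dynamic, deterministic choice of a limited voting subset per term can be sketched like this; the hash-seeded mechanism is an illustrative assumption, since the abstract does not specify how membership subsets are generated:

```python
import hashlib
import random

def voting_subset(members, term, k):
    """Derive the limited voting subset for a consensus term deterministically,
    so every node computes the same subset without extra coordination."""
    seed = int.from_bytes(hashlib.sha256(str(term).encode()).digest()[:8], "big")
    return sorted(random.Random(seed).sample(sorted(members), k))

members = {"n1", "n2", "n3", "n4", "n5", "n6", "n7"}
s1 = voting_subset(members, term=1, k=3)
s2 = voting_subset(members, term=2, k=3)   # subsets rotate across terms,
                                           # spreading load statistically
```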
Reference | Related Articles | Metrics
Dimension reduction method of brain network state observation matrix based on Spectral Embedding
DAI Zhaokun, LIU Hui, WANG Wenzhe, WANG Yanan
Journal of Computer Applications    2017, 37 (8): 2410-2415.   DOI: 10.11772/j.issn.1001-9081.2017.08.2410
Abstract489)      PDF (1084KB)(579)       Save
As the brain network state observation matrix reconstructed from functional Magnetic Resonance Imaging (fMRI) is high-dimensional and lacks distinctive features, a dimensionality reduction method based on Spectral Embedding was presented. Firstly, the Laplacian matrix was constructed from the similarity measurements between samples. Secondly, the two principal eigenvectors obtained by decomposing the Laplacian matrix were selected to construct a two-dimensional eigenvector space, thereby mapping the dataset from high dimension to low dimension. The method was applied to reduce the dimension of the matrix and visualize it in two-dimensional space, and the results were evaluated by category validity indicators. Compared with dimensionality reduction algorithms such as Principal Component Analysis (PCA), Locally Linear Embedding (LLE) and Isometric Mapping (Isomap), the mapping points in the low-dimensional space obtained by the proposed method have clear category significance. According to the category validity indicators, compared with the Multi-Dimensional Scaling (MDS) and t-distributed Stochastic Neighbor Embedding (t-SNE) algorithms, the Di index (the average distance among within-class samples) of the proposed method was decreased by 87.1% and 65.2% respectively, and the Do index (the average distance among between-class samples) was increased by 351.3% and 25.5% respectively. Finally, the visualization results on a number of samples show a certain regularity, validating the effectiveness and universality of the proposed method.
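The two steps, building a Laplacian from pairwise similarities and keeping the leading nontrivial eigenvectors, can be sketched with NumPy; the Gaussian affinity and the synthetic two-cluster data are illustrative assumptions:

```python
import numpy as np

def spectral_embed(X, sigma=1.0, dim=2):
    """Spectral Embedding: Gaussian-affinity graph Laplacian, then keep the
    eigenvectors of the smallest nonzero eigenvalues as coordinates."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2 * sigma**2))          # similarity measurement
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W                 # unnormalized Laplacian
    _, vecs = np.linalg.eigh(L)               # eigenvalues in ascending order
    return vecs[:, 1:dim + 1]                 # skip the constant eigenvector

# Two synthetic clusters in 4-D, mapped to a 2-D eigenvector space.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.2, (20, 4)),
               rng.normal(1.5, 0.2, (20, 4))])
Y = spectral_embed(X)
```

In the embedding, the first coordinate separates the two clusters, which is the "category significance" the abstract evaluates.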
Reference | Related Articles | Metrics
Residential electricity consumption analysis based on regularized matrix factorization
WANG Yang, WU Fan, YAO Zongqiang, LIU Jie, LI Dong
Journal of Computer Applications    2017, 37 (8): 2405-2409.   DOI: 10.11772/j.issn.1001-9081.2017.08.2405
Abstract701)      PDF (757KB)(778)       Save
Focusing on the features of electricity user groups, a residential electricity consumption analysis method based on geographically regularized matrix factorization in the smart grid was proposed to explore the characteristics of electricity users and provide decision support for personalized power dispatching. In the proposed algorithm, customers were first mapped into a hidden feature space that represents the characteristics of users' electricity behavior, and then the k-means clustering algorithm was employed to segment customers in that space. In particular, geographic information was innovatively introduced as a regularization factor of the matrix factorization, which made the hidden feature space not only satisfy the orthogonality characteristics of user groups but also map geographically close users close to each other, consistent with real physical space. To verify its effectiveness, the proposed algorithm was applied to the real residential data analysis and mining task of the smart grid application in Sino-Singapore Tianjin Eco-City (SSTEC). The experimental results show that, compared with baseline algorithms including the Vector Space Model (VSM) and the Nonnegative Matrix Factorization (NMF) algorithm, the proposed algorithm obtains better user segmentation clustering results and uncovers distinct power consumption modes of different user groups, helping to improve the management and service level of the smart grid.
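The idea of a geographic regularizer added to matrix factorization can be sketched as follows; the squared-distance penalty, learning rate and toy load matrix are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def geo_regularized_mf(R, neighbors, k=2, lam=0.1, gamma=0.5, lr=0.01, epochs=300):
    """Factorize R ≈ U @ V.T with an extra geographic regularizer
    gamma * ||U_i - U_j||^2 for each pair (i, j) of nearby users."""
    rng = np.random.default_rng(0)
    n, m = R.shape
    U = rng.normal(0.0, 0.1, (n, k))
    V = rng.normal(0.0, 0.1, (m, k))
    for _ in range(epochs):
        E = R - U @ V.T                       # reconstruction error
        gU = -E @ V + lam * U
        for i, j in neighbors:                # pull neighbors together
            gU[i] += gamma * (U[i] - U[j])
            gU[j] += gamma * (U[j] - U[i])
        gV = -E.T @ U + lam * V
        U -= lr * gU
        V -= lr * gV
    return U, V

# Toy load matrix: users 0 and 1 are geographic neighbors with similar profiles.
R = np.array([[5.0, 1.0, 0.0],
              [5.0, 0.0, 1.0],
              [0.0, 1.0, 5.0]])
U, V = geo_regularized_mf(R, neighbors=[(0, 1)])
d01 = np.linalg.norm(U[0] - U[1])   # neighbors end up close in hidden space
d02 = np.linalg.norm(U[0] - U[2])
```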
Reference | Related Articles | Metrics
Obfuscating algorithm based on congruence equation and improved flat control flow
WANG Yan, HUANG Zhangjin, GU Naijie
Journal of Computer Applications    2017, 37 (6): 1803-1807.   DOI: 10.11772/j.issn.1001-9081.2017.06.1803
Abstract499)      PDF (720KB)(692)       Save
Aiming at the overly simple results of existing control flow obfuscation algorithms, an obfuscation algorithm based on congruence equations and improved control flow flattening was presented. First, a kind of opaque predicate for use in the basic blocks of source code was created using secret keys and a group of congruence equations. Then, a new algorithm for creating N-state opaque predicates based on Logistic chaotic mapping was presented and applied to improve the existing control flow flattening algorithm. Finally, the two proposed algorithms were combined to obfuscate source code, increasing the complexity of the flattened control flow and making it more difficult to crack. Compared with the control flow flattening algorithm based on chaotic opaque predicates, the tamper-resistance attack time of code obfuscated by the proposed algorithm was increased by more than 22% on average, and its total cyclomatic complexity was improved by more than 34% on average. The experimental results show that the proposed algorithm guarantees the correctness of the obfuscated program's execution results and yields high cyclomatic complexity, so it can effectively resist both static and dynamic attacks.
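A classical congruence-based opaque predicate illustrates the idea: x·(x+1) is always even, so x²(x+1)² ≡ 0 (mod 4) holds for every integer x. The paper's keyed construction and N-state Logistic-mapping predicates are more elaborate; this is only the basic building block:

```python
def opaque_true(x: int) -> bool:
    """Congruence-based opaque predicate: x*(x+1) is always even, so
    x^2 * (x+1)^2 ≡ 0 (mod 4) holds for every integer x."""
    return (x * x * (x + 1) * (x + 1)) % 4 == 0

def obfuscated_abs(v: int) -> int:
    """The predicate guards a branch whose else-arm is dead code, yet a
    static analyzer cannot trivially prove the branch is always taken."""
    if opaque_true(v):
        return v if v >= 0 else -v
    return v ^ 0xDEAD   # bogus path, never executed
```

Because the predicate is always true, the obfuscated function computes the same result as a plain absolute value while its control flow graph gains an extra, unreachable path.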
Reference | Related Articles | Metrics